The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, 32% of participants stated that they did not have enough time for it, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based, and of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% performed ensembling, based either on multiple identical models (61%) or on heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
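Two of the most frequently reported strategies above, patch-based training for oversized samples and k-fold cross-validation on the training set, are illustrated in the minimal sketch below; all names and parameter values are illustrative and not taken from the survey.

```python
# Illustrative sketch only; patch size, fold count, and function names are assumptions.
import numpy as np

def random_patch(volume, patch_size=(64, 64, 64)):
    """Crop a random 3D patch so an image too large to process at once can be trained on piecewise."""
    starts = [np.random.randint(0, s - p + 1) for s, p in zip(volume.shape, patch_size)]
    return volume[tuple(slice(st, st + p) for st, p in zip(starts, patch_size))]

def k_fold_indices(n_samples, k=5, seed=0):
    """Split training-set indices into k folds; one model per fold can later be ensembled."""
    rng = np.random.default_rng(seed)
    return np.array_split(rng.permutation(n_samples), k)
```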
Image-text retrieval (ITR) is a challenging task in the field of multimodal information processing due to the semantic gap between modalities. In recent years, researchers have made great progress in exploring the accurate alignment between image and text. However, existing works mainly focus on the fine-grained alignment between image regions and sentence fragments, ignoring the guiding role of contextual background information. In fact, integrating local fine-grained information with global contextual background information can provide more semantic clues for retrieval. In this paper, we propose a novel Hierarchical Graph Alignment Network (HGAN) for image-text retrieval. First, to capture comprehensive multimodal features, we construct feature graphs for the image and text modalities, respectively. Then, a multi-granularity shared space is established with the designed Multi-granularity Feature Aggregation and Rearrangement (MFAR) module, which enhances the semantic correspondence between local and global information and yields more accurate feature representations for the image and text modalities. Finally, the ultimate image and text features are further refined through three-level similarity functions to achieve hierarchical alignment. To validate the proposed model, we perform extensive experiments on the MS-COCO and Flickr30K datasets. Experimental results show that the proposed HGAN outperforms state-of-the-art methods on both datasets, demonstrating the effectiveness and superiority of our model.
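As a rough illustration of hierarchical alignment, the sketch below combines similarities at a local (region-word), cross-granularity (local-global), and global level; the pooling and weighting choices here are assumptions, not the authors' exact three-level similarity functions.

```python
# Illustrative sketch; shapes and the equal-weight combination are assumptions.
import torch.nn.functional as F

def cosine(a, b):
    # Pairwise cosine similarity between rows of a and rows of b.
    return F.normalize(a, dim=-1) @ F.normalize(b, dim=-1).t()

def hierarchical_similarity(region_feats, word_feats, img_global, txt_global):
    # region_feats: (R, d) image regions, word_feats: (W, d) sentence fragments,
    # img_global / txt_global: (d,) global context features.
    local_sim = cosine(region_feats, word_feats).max(dim=1).values.mean()            # fine-grained level
    cross_sim = 0.5 * (cosine(region_feats, txt_global.unsqueeze(0)).mean()
                       + cosine(word_feats, img_global.unsqueeze(0)).mean())         # local-global level
    global_sim = cosine(img_global.unsqueeze(0), txt_global.unsqueeze(0)).squeeze()  # global level
    return (local_sim + cross_sim + global_sim) / 3.0
```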
Monocular depth estimation has been actively studied in fields such as robot vision, autonomous driving, and 3D scene understanding. Given a sequence of color images, unsupervised learning methods based on the Structure-from-Motion (SfM) framework simultaneously predict depth and relative camera pose. However, dynamically moving objects in the scene violate the static-world assumption, resulting in inaccurate depth for dynamic objects. In this work, we propose a new method to handle such dynamic object motion through monocular 3D object detection. Specifically, we first detect 3D objects in the images and build per-pixel correspondences for the dynamic pixels using the detected object poses, while the static pixels corresponding to the rigid background are modeled with camera motion. In this way, the depth of every pixel can be learned via a meaningful geometric model. In addition, objects are detected as cuboids with absolute scale, which is used to eliminate the scale-ambiguity problem inherent in monocular vision. Experiments on the KITTI depth dataset show that our method achieves state-of-the-art depth estimation performance. Furthermore, joint training of depth, camera motion, and object pose also improves monocular 3D object detection performance. To the best of our knowledge, this is the first work that allows a monocular 3D object detection network to be fine-tuned in a self-supervised manner.
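The core geometric idea, modeling background pixels with ego-motion and pixels inside detected 3D boxes with the object's own rigid motion, can be sketched as below; tensor shapes and helper names are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch; homogeneous-point representation and function names are assumptions.
import torch

def rigid_transform(points, T):
    """Apply a 4x4 rigid transform T to homogeneous 3D points of shape (N, 4)."""
    return (T @ points.t()).t()

def warp_points(points, object_mask, T_cam, T_obj):
    """Static pixels follow camera motion; pixels of a detected object follow the object pose."""
    warped = rigid_transform(points, T_cam)
    warped[object_mask] = rigid_transform(points[object_mask], T_obj)
    return warped
```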
Deep neural networks (DNNs) have been ubiquitously adopted in the Internet of Things and are becoming an integral part of our daily lives. When tackling evolving learning tasks in the real world, such as classifying different types of objects, DNNs face the challenge of continually retraining themselves on different edge devices. Federated continual learning is a promising technique that offers partial solutions but has yet to overcome the following difficulties: significant accuracy loss due to limited on-device processing, negative knowledge transfer caused by limited communication of non-IID data, and limited scalability with respect to tasks and edge devices. In this paper, we propose FedKNOW, an accurate and scalable federated continual learning framework built on the novel concept of signature task knowledge. FedKNOW is a client-side solution that continuously extracts and integrates the knowledge of signature tasks that strongly influence the current task. Each FedKNOW client is composed of a knowledge extractor, a gradient restorer, and, most importantly, a gradient integrator. Upon training a new task, the gradient integrator prevents catastrophic forgetting and mitigates negative knowledge transfer by effectively combining signature tasks identified from past local tasks and from other clients' current tasks via the global model. We implement FedKNOW in PyTorch and extensively evaluate it against state-of-the-art techniques on popular federated continual learning benchmarks. Evaluation results on heterogeneous edge devices show that FedKNOW improves model accuracy by 63.24% without increasing model training time, reduces communication cost by 34.28%, and achieves even larger improvements in difficult scenarios such as large numbers of tasks or clients and training of complex networks.
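The abstract does not spell out the gradient integrator's update rule; a plausible minimal sketch, borrowing the conflict-projection idea from gradient-based continual learning, is shown below and should be read as an assumption rather than FedKNOW's actual algorithm.

```python
# Assumed mechanism for illustration only; not FedKNOW's published update rule.
import torch

def integrate_gradient(current_grad, signature_grads):
    """Project the current task gradient so it no longer conflicts with stored signature-task gradients."""
    g = current_grad.clone()
    for s in signature_grads:
        dot = torch.dot(g, s)
        if dot < 0:  # conflict: following g would increase the loss of a signature task
            g = g - dot / (s.norm() ** 2 + 1e-12) * s
    return g
```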
Bird's-Eye-View (BEV) 3D object detection is a crucial multi-view technique for autonomous driving systems. Recently, many works have been proposed that follow a similar paradigm consisting of three essential components: camera feature extraction, BEV feature construction, and task heads. Among the three, BEV feature construction is BEV-specific compared with 2D tasks. Existing methods aggregate the multi-view camera features onto a flattened grid to construct the BEV feature. However, flattening the BEV space along the height dimension fails to emphasize the informative features at different heights: a barrier, for example, is located at a low height, while a truck extends to a greater height. In this paper, we propose a novel method named BEV Slice Attention Network (BEV-SAN) to exploit the intrinsic characteristics of different heights. Instead of flattening the BEV space, we first sample along the height dimension to build global and local BEV slices. Then, the features of the BEV slices are aggregated from the camera features and merged by an attention mechanism. Finally, we fuse the merged local and global BEV features with a transformer to generate the final feature map for the task heads. The purpose of the local BEV slices is to emphasize informative heights. To find them, we further propose a LiDAR-guided sampling strategy that leverages the statistical distribution of LiDAR points to determine the heights of the local slices. Compared with uniform sampling, LiDAR-guided sampling identifies more informative heights. We conduct detailed experiments to demonstrate the effectiveness of BEV-SAN. Code will be released.
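A minimal sketch of the slice-and-attend step is given below: the BEV volume is split into height slices, each slice is scored, and the slices are merged with softmax attention along the height axis. Shapes and the single 1x1-conv scorer are illustrative assumptions.

```python
# Illustrative sketch; the actual BEV-SAN fusion (local/global slices plus transformer) is richer.
import torch
import torch.nn as nn

class SliceAttention(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-slice attention score

    def forward(self, bev_volume):
        # bev_volume: (B, C, Z, X, Y) voxel features; Z is the height axis.
        slices = bev_volume.unbind(dim=2)                             # Z tensors of (B, C, X, Y)
        scores = torch.stack([self.score(s) for s in slices], dim=2)  # (B, 1, Z, X, Y)
        weights = torch.softmax(scores, dim=2)
        return (bev_volume * weights).sum(dim=2)                      # merged (B, C, X, Y) BEV map
```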
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper, we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance across a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B, instruction-finetuned on 1.8K tasks, outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
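The released Flan-T5 checkpoints are available on the Hugging Face Hub and can be queried zero-shot with the `transformers` library; the model size and prompt below are only an example.

```python
# Example usage of the public Flan-T5 checkpoints (model size and prompt are arbitrary choices).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

prompt = "Answer the following question. What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```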
Recent breakthroughs in deep learning methods have sparked interest in learning-based bug detectors. Compared with traditional static analysis tools, these bug detectors are learned directly from data and are therefore easier to create. On the other hand, they are hard to train and require large amounts of data that are not readily available. In this paper, we propose a new approach, called meta bug detection, which offers three crucial advantages over existing learning-based bug detectors: it is bug-type generic (i.e., capable of catching bug types that were entirely unobserved during training), self-explainable (i.e., capable of explaining its own predictions without any external interpretability method), and sample-efficient (i.e., requiring far less training data than standard bug detectors). Our extensive evaluation shows that our meta bug detector (MBD) is effective at catching a wide variety of bugs, including null pointer dereferences, array index out-of-bounds errors, file handle leaks, and even data races in concurrent programs; in the process, MBD also significantly outperforms several notable baselines, including Facebook Infer, a prominent static analysis tool, and FICS, a state-of-the-art anomaly detection method.
Prompt-based fine-tuning of pretrained models has proven effective for many natural language processing tasks. However, prompt-based tuning has not yet been explored in the biomedical domain. Biomedical words are often rare in the general domain but ubiquitous in biomedical contexts, which significantly degrades the performance of pretrained models on downstream biomedical applications even after fine-tuning, especially in low-resource scenarios. We propose a simple yet effective approach that helps the model learn rare biomedical words during prompt tuning. Experimental results show that, with few-shot vanilla prompt settings and without any extra parameters or training steps, our method can improve performance on a biomedical natural language inference task by up to 6%.
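For reference, a "vanilla" prompt setting for natural language inference typically wraps the premise and hypothesis in a cloze template and maps the masked-LM prediction to labels via verbalizer words; the sketch below illustrates this baseline, with a template, backbone, and verbalizers that are assumptions rather than the paper's exact choices.

```python
# Illustrative baseline only; template, backbone model, and verbalizer words are assumptions.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
verbalizers = {"entailment": "yes", "contradiction": "no", "neutral": "maybe"}

def predict(premise, hypothesis):
    # Cloze template: the masked position is scored against the verbalizer words.
    text = f"{premise} ? {tokenizer.mask_token} , {hypothesis}"
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    logits = model(**inputs).logits[0, mask_pos]
    ids = {label: tokenizer.convert_tokens_to_ids(word) for label, word in verbalizers.items()}
    return max(ids, key=lambda label: logits[ids[label]].item())
```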
Compared with traditional partial differential equation (PDE) solvers, physics-informed neural networks (PINNs) can achieve lower development and solution costs, for example when reconstructing physical fields or solving inverse problems. Thanks to the advantages of parameter sharing, spatial feature extraction, and low inference cost, convolutional neural networks (CNNs) are increasingly used in PINNs. To adapt convolutional PINNs to different equations, researchers have to spend considerable time tuning critical hyperparameters. Moreover, the effects of finite-difference accuracy, model complexity, and mesh resolution on the prediction results of convolutional PINNs remain unclear. To fill these research gaps, in this paper (1) a multi-receptive-field PINN (MRF-PINN) model is constructed that adapts to different equation types and mesh resolutions without manual tuning; (2) MRF-PINN is verified on three typical linear PDEs (elliptic, parabolic, hyperbolic) and a nonlinear PDE (the Navier-Stokes equations); and (3) the contribution of each receptive field to the final MRF-PINN result is analyzed, and the effects of finite-difference accuracy, model complexity (channel number), and mesh resolution on the MRF-PINN results are tested. This paper shows that MRF-PINN can adapt to completely different equation types and mesh resolutions without any hyperparameter tuning. Furthermore, the solution error decreases significantly with higher-order finite differences, larger channel numbers, and higher mesh resolution, so MRF-PINN is expected to become a general convolutional PINN scheme.
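A minimal sketch of the multi-receptive-field idea, assuming it amounts to parallel convolution branches with different kernel sizes whose outputs are summed, is given below; the branch sizes and the 2D setting are illustrative assumptions.

```python
# Illustrative sketch; kernel sizes and the summation-based fusion are assumptions.
import torch.nn as nn

class MultiReceptiveFieldBlock(nn.Module):
    def __init__(self, channels, kernel_sizes=(1, 3, 5)):
        super().__init__()
        # One branch per receptive field; padding keeps the mesh resolution unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, field):
        # field: (B, C, H, W) discretized physical field on a uniform mesh.
        return sum(branch(field) for branch in self.branches)
```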
Self-supervised monocular methods can efficiently learn depth information for weakly textured surfaces or reflective objects. However, their depth accuracy is limited by the inherent ambiguity of monocular geometric modeling. In contrast, multi-frame depth estimation methods improve depth accuracy thanks to the success of multi-view stereo (MVS), which directly exploits geometric constraints. Unfortunately, MVS often suffers from textureless regions, non-Lambertian surfaces, and moving objects, especially in real-world video sequences without known camera motion or depth supervision. Therefore, we propose MOVEDepth, which exploits monocular cues and velocity guidance to improve multi-frame depth learning. Unlike existing methods that enforce consistency between MVS depth and monocular depth, MOVEDepth enhances multi-frame depth learning by directly addressing the inherent problems of MVS. The key to our method is to use monocular depth as a geometric prior for constructing the MVS cost volume and to adjust the depth candidates of the cost volume under the guidance of the predicted camera velocity. We further fuse monocular depth and MVS depth by learning the uncertainty of the cost volume, which resolves the ambiguity of multi-view geometry in depth estimation. Extensive experiments show that MOVEDepth achieves state-of-the-art performance: compared with Monodepth2 and PackNet, our method relatively improves depth accuracy by 20% and 19.8% on the KITTI benchmark. MOVEDepth also generalizes to the more challenging DDAD benchmark, with a relative improvement of 7.2%. The code is available at https://github.com/jeffwang987/movedepth.
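The cost-volume construction described above can be sketched as sampling depth hypotheses around the monocular prior, with a search range scaled by the predicted camera velocity; the clamping and linear scaling rule below are illustrative assumptions, not MOVEDepth's exact formulation.

```python
# Illustrative sketch; the velocity-to-range mapping and candidate count are assumptions.
import torch

def depth_candidates(mono_depth, velocity, num_candidates=8, base_range=0.2):
    """mono_depth: (B, 1, H, W) monocular prior; velocity: (B,) predicted camera speed."""
    # A faster camera gives a larger baseline, so allow a wider relative search range.
    rel_range = base_range * velocity.clamp(min=0.1, max=1.0).view(-1, 1, 1, 1)
    offsets = torch.linspace(-1.0, 1.0, num_candidates, device=mono_depth.device).view(1, -1, 1, 1)
    return mono_depth * (1.0 + rel_range * offsets)   # (B, num_candidates, H, W) depth hypotheses
```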